Commentary: A Way to Protect Kids Online That Passes Constitutional Muster

by Kara Frederick and Joel Thayer

 

A bipartisan group of senators is about to take Big Tech CEOs to task on Jan. 31, 2024, by having them publicly address their failures to protect kids online. And the CEOs need to answer for those failures. The harms social media poses to children are well documented and, at this point, indisputable—even by the companies themselves.

YouTube admits that it hosts content harmful to children and even calls for legislation to address the problems it helps create. YouTube’s CEO indicated as much when he published his “principled approach for children and teenagers.”

In it, he writes that YouTube “can’t do it alone … [and] support[s] policymakers, families, researchers, companies, and experts coming together to define a set of consistent standards for companies serving young people online.” He then advocates for policies that provide more parental rights, age-appropriate content for minors, and “appropriate safeguards.”

Even with this call for legislation, policymakers are finding it hard to impose basic measures, such as age restriction requirements to protect kids using these services.

Age restrictions on a deliberately addictive product that targets children should be a no-brainer. Various levels of government impose age restrictions on offerings like these all the time. For example, kids can’t get tattoos or piercings without parental consent. Or see an R-rated movie without an accompanying adult. Or purchase video games rated “M.” But somehow, social media services are treated as distinct, even though they trigger addictive responses comparable to those of nicotine and gambling.

So, why haven’t we done it?

State legislatures are making unforced errors by not considering the full internet stack—e.g., the operating system, the app store, and the app—when imposing such measures. Arkansas’ social media law is a prime example. It would have required some, but not all, social media companies to verify the age of their users; the law didn’t apply to YouTube, for instance.

Failing to take a holistic approach when crafting legislation for the social media market unnecessarily opened the state up to a First Amendment challenge. To require such a measure, the state would have to show that it is both “narrowly tailored,” so that it does not burden adult users’ speech, and necessary to address the alleged harms.

According to federal District Court Judge Timothy Brooks, the state failed that test because excluding some social media companies made little sense when the law’s stated goal was to protect kids from the harms social media inflicts. Brooks pointed out that even though “YouTube is not regulated by [Arkansas social media law],” the state oddly cited YouTube as being particularly harmful to children and noted that “[a]mong all types of online platforms, YouTube was the most widely used by children.”

So, if Arkansas’ law doesn’t apply to YouTube, how can the state justify an imposition on adult speech on services that aren’t even favored by most children?

But it’s not all bad news. Brooks may have provided legislatures a path forward to get age verification without implicating the First Amendment—by going through the app stores.

Brooks seemed most troubled that Arkansas’ law imposed duties on social media companies rather than on Google and Apple, the companies that run two of the biggest app stores on the internet. Throughout his opinion, he noted that Apple and Google give parents several tools to shield kids from certain apps, and he wondered why the state needed to go through social media platforms to do what the app stores ought to.

He’s got a point. Social media apps rely on users to self-certify their age when they create an account, but they can’t fully confirm that the user is being truthful—whereas Apple and Google know the precise age of the owner of the device. How? Well, their software is integrated at every level of the mobile device. They own the two dominant mobile operating systems (i.e., iOS and Android, respectively), app stores (i.e., the App Store and Play Store), and browsers (i.e., Safari and Chrome).

Candidly, not including Apple and Google in age verification legislation makes little sense, because including them would almost certainly resolve the First Amendment concern.

Why? Because all the law would have to require is that Apple and Google give a social media app a thumbs up or thumbs down when it asks whether the device is owned by an adult or a child. The social media company would no longer have to guess how old the user is, which limits, or even eliminates, the risk of denying an adult the ability to engage on American social media platforms.
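To make the mechanics concrete, here is a minimal sketch, in Kotlin, of what such a thumbs-up/thumbs-down exchange could look like. Everything in it is hypothetical: “DeviceAgeSignal” and its method are invented names for illustration, not an existing Apple or Google API.

    // Hypothetical sketch only: "DeviceAgeSignal" is an invented name;
    // no existing Apple or Google API is described here.

    // The only thing the operating system returns: a thumbs up or a thumbs down.
    enum class AgeSignal { ADULT, MINOR }

    // Hypothetical OS-level service backed by the account age
    // that Apple or Google already holds for the device owner.
    interface DeviceAgeSignal {
        fun deviceOwnerSignal(): AgeSignal
    }

    // App-side check: the platform never sees a birthdate, only the signal.
    fun canCreateAdultAccount(os: DeviceAgeSignal): Boolean =
        when (os.deviceOwnerSignal()) {
            AgeSignal.ADULT -> true   // proceed with normal signup
            AgeSignal.MINOR -> false  // route to a parental-consent flow instead
        }

The design point is that the answer is binary: the app learns whether the device owner is an adult, and nothing more.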

What’s more, going through the app stores makes parental consent more workable. As the court noted, parents have more control over the device than over the apps themselves, and the app stores are already required to obtain express parental consent for in-app purchases to comply with consent decrees from the Federal Trade Commission.

If the app store on a child’s device prohibits the child from downloading apps without a parent’s credentials (e.g., entering a password, providing a fingerprint, or using facial recognition), then parents immediately gain more agency over the apps their children can access.

Better yet, it wouldn’t require users to hand any more personal information to social media companies. If a social media company could simply send a request to the device (leveraging the data already available from the device’s operating system) to confirm, almost instantaneously, that a person is over the age of 18, then two things occur: 1) the user provides no data beyond what he has already given to Apple or Google; and 2) the social media company receives only a thumbs up or a thumbs down in response, which limits the amount of new information it collects.
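Extending the hypothetical sketch above, the response payload would carry a single boolean and nothing else, which is the data-minimization point. Again, the names and fields here are invented for illustration.

    // Hypothetical response payload, with invented names, to make the
    // data-minimization point concrete: one boolean, nothing else.
    data class AgeCheckResponse(
        val overEighteen: Boolean  // no birthdate, no ID scan, no name
    )

    fun onAgeCheck(resp: AgeCheckResponse) {
        if (resp.overEighteen) {
            println("Thumbs up: adult device owner; no new personal data collected.")
        } else {
            println("Thumbs down: minor; trigger the parental-consent flow.")
        }
    }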

State legislators should learn from these mistakes in order to effectively protect kids online. The biggest lesson so far is to look to both the apps and the app stores when it comes to age restrictions.

– – –

Kara Frederick is the director of the Tech Policy Center at The Heritage Foundation. Joel Thayer is the president of the Digital Progress Institute.

Appeared at and reprinted from DailySignal.com
